Search Results
Search for: All records
Total Resources: 3
Author / Contributor:
- Deng, Jieren (3)
- Ding, Caiwen (3)
- Li, Ji (2)
- Rajasekaran, Sanguthevar (2)
- Wang, Chenghong (2)
- Wang, Yijue (2)
- Behnam, Payman (1)
- Bojnordi, Mahdi (1)
- Cai, Yuxuan (1)
- Fu, Jingyan (1)
- Han, Shuo (1)
- Li, Zhengang (1)
- Liao, Zhiheng (1)
- Lin, Sheng (1)
- Liu, Hang (1)
- Ma, Xiaolong (1)
- Meng, Xianrui (1)
- Miao, Fei (1)
- Shafiee, Ali (1)
- Shang, Chao (1)
- Wang, Chenghong; Deng, Jieren; Meng, Xianrui; Wang, Yijue; Li, Ji; Lin, Sheng; Han, Shuo; Miao, Fei; Rajasekaran, Sanguthevar; Ding, Caiwen (EMNLP)
-
- Yuan, Geng; Behnam, Payman; Cai, Yuxuan; Shafiee, Ali; Fu, Jingyan; Liao, Zhiheng; Li, Zhengang; Ma, Xiaolong; Deng, Jieren; Wang, Jinhui; et al. (Design, Automation & Test in Europe Conference & Exhibition (DATE))
  As the number of weight parameters in deep neural networks (DNNs) continues to grow, the demand for ultra-efficient DNN accelerators has motivated research on non-traditional architectures built on emerging technologies. Resistive Random-Access Memory (ReRAM) crossbars have been used to perform in-situ matrix-vector multiplication for DNNs, and DNN weight-pruning techniques have been applied to ReRAM-based mixed-signal DNN accelerators to reduce weight storage and accelerate computation. However, existing work captures very few peripheral-circuit features, such as analog-to-digital converters (ADCs), during neural network design. ADCs have become the dominant contributor to the power consumption and area cost of current mixed-signal accelerators, and the large overhead of these peripheral circuits remains unaddressed. To address this problem, the authors propose TINYADC, a novel weight-pruning framework for ReRAM-based mixed-signal DNN accelerators that reduces the number of bits required for ADC resolution, and hence the accelerator's overall area and power consumption, without introducing any computational inaccuracy. Compared to state-of-the-art pruning work on the ImageNet dataset, TINYADC achieves 3.5× power and 2.9× area reduction, respectively. The TINYADC framework also improves on a state-of-the-art architecture design by 29% and 40% in throughput per unit area and per watt (GOPs/s/mm² and GOPs/W), respectively.
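The TINYADC abstract above reports gains in two standard accelerator efficiency metrics: throughput per unit chip area (GOPs/s/mm²) and throughput per watt (GOPs/W). As an illustration only, the small sketch below shows how these metrics are computed; every numeric value in it is a made-up placeholder, not a figure from the paper.

```python
# Illustrative sketch of the two efficiency metrics named in the abstract.
# All numbers are hypothetical placeholders, not results from the paper.

def area_efficiency(throughput_gops: float, area_mm2: float) -> float:
    """Throughput per unit chip area, in GOPs/s per mm^2."""
    return throughput_gops / area_mm2

def power_efficiency(throughput_gops: float, power_w: float) -> float:
    """Throughput per watt, in GOPs/W."""
    return throughput_gops / power_w

# Hypothetical baseline accelerator: 1000 GOPs/s, 50 mm^2, 10 W.
base_eff_area = area_efficiency(1000.0, 50.0)    # 20.0 GOPs/s/mm^2
base_eff_power = power_efficiency(1000.0, 10.0)  # 100.0 GOPs/W

# Shrinking ADC area and power (e.g. by needing fewer resolution bits)
# improves both metrics even at the same raw throughput.
pruned_eff_area = area_efficiency(1000.0, 50.0 / 2.9)
pruned_eff_power = power_efficiency(1000.0, 10.0 / 3.5)

print(base_eff_area, base_eff_power)
print(pruned_eff_area, pruned_eff_power)
```

The point of the sketch is only that both metrics scale inversely with area and power: cutting peripheral-circuit overhead raises efficiency even when throughput is unchanged.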